Results 1 - 5 of 5
1.
arxiv; 2024.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2402.03763v1

ABSTRACT

The start of vaccination campaigns in Europe in late December 2020 was followed by the online spread of controversies and conspiracies about vaccine validity and efficacy. We study Twitter discussions in three major European languages (Italian, German, and French) during the vaccination campaign. Moving beyond content analysis to the structural aspects of online discussions, we examine polarization and the potential formation of echo chambers (a measurement of this kind is sketched below), revealing nuanced behavioral and topical differences in user interactions across the analyzed countries. Notably, we identify strong anti- and pro-vaccine factions exhibiting heterogeneous temporal polarization patterns in different countries. Through a detailed examination of news-sharing sources, we uncover the widespread use of other platforms, such as Telegram and YouTube, for disseminating low-credibility information, indicating a concerning decline in news credibility over time. Overall, the analysis exposes the profound impact of polarization and the emergence of distinct anti-vaccine and pro-vaccine communities over time.


Subject(s)
COVID-19
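
A minimal sketch of one common way to probe echo-chamber structure in a retweet network, in the spirit of the analysis above: compare each user's leaning with the average leaning of the accounts they retweet. The toy graph, the leaning values, and the helper name are illustrative assumptions, not the paper's actual pipeline.

import networkx as nx

def neighbor_leaning(G, leaning):
    # For each user, pair their own leaning with the mean leaning of the
    # accounts they retweet; pairs clustered near the diagonal suggest
    # echo-chamber-like structure.
    pairs = {}
    for u in G.nodes():
        retweeted = list(G.successors(u))
        if retweeted:
            mean_nbr = sum(leaning[v] for v in retweeted) / len(retweeted)
            pairs[u] = (leaning[u], mean_nbr)
    return pairs

# Toy data: negative values lean anti-vaccine, positive values lean pro-vaccine.
G = nx.DiGraph([("a", "b"), ("a", "c"), ("d", "c"), ("d", "e")])
leaning = {"a": -0.8, "b": -0.6, "c": 0.1, "d": 0.7, "e": 0.9}
print(neighbor_leaning(G, leaning))

In practice, a user's leaning would itself be estimated, for example from the credibility of the news domains they share.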
2.
arxiv; 2024.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2401.08789v1

ABSTRACT

The COVID-19 pandemic has triggered profound societal changes, extending beyond its health impacts to the moralization of behaviors. Drawing on moral psychology, this study examines the moral dimensions shaping online discussions surrounding COVID-19 over nearly two years. We identify four distinct user groups characterized by differences in morality, political ideology, and communication style. We underscore the intricate relationship between moral differences and political ideology, revealing a nuanced picture in which moral orientations do not rigidly separate users politically. Furthermore, we uncover patterns of moral homophily within the social network (sketched below), pointing to one potential moral echo chamber. Analyzing the moral themes embedded in messages, we observe that messages featuring moral foundations not typically favored by their authors, as well as those incorporating multiple moral foundations, resonate more effectively with out-group members. This research contributes valuable insights into the complex interplay between moral foundations, communication dynamics, and network structure on Twitter.


Subject(s)
COVID-19
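
A minimal sketch of how moral homophily could be quantified on an interaction network, assuming each user has already been assigned to a moral/ideological group by some upstream clustering step; the group labels, edges, and the use of attribute assortativity are illustrative choices, not necessarily the paper's method.

import networkx as nx

# Toy interaction network; each node carries a moral group label
# (the labels and edges are made up for illustration).
G = nx.Graph()
G.add_nodes_from([
    ("u1", {"group": "care_left"}), ("u2", {"group": "care_left"}),
    ("u3", {"group": "authority_right"}), ("u4", {"group": "authority_right"}),
    ("u5", {"group": "mixed"}),
])
G.add_edges_from([("u1", "u2"), ("u3", "u4"), ("u2", "u5"), ("u4", "u5")])

# Attribute assortativity > 0 means users interact preferentially with
# members of their own moral group, i.e. moral homophily.
print(nx.attribute_assortativity_coefficient(G, "group"))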
3.
arxiv; 2023.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2304.02983v1

ABSTRACT

Detecting misinformation threads is crucial to guaranteeing a healthy environment on social media. We address the problem using a data set created during the COVID-19 pandemic: it contains cascades of tweets discussing information weakly labeled as reliable or unreliable, based on a prior evaluation of the information source. Models that identify unreliable threads usually rely on textual features, but reliability is not just a matter of what is said; it also depends on who says it and to whom. We therefore additionally leverage network information. Following the homophily principle, we hypothesize that users who interact are generally interested in similar topics and spread similar kinds of news, which in turn tend to be reliable or not. We test several methods for learning representations of the social interactions within the cascades, combining them with deep neural language models in a Multi-Input (MI) framework (sketched below). By keeping track of the temporal sequence of interactions, we improve over previous state-of-the-art models.


Subject(s)
COVID-19, Language Disorders
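
A minimal sketch of a multi-input classifier in the spirit described above, combining a pre-computed text embedding of a thread with a recurrent encoding of its interaction sequence; the dimensions, the GRU choice, and all names are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class MultiInputClassifier(nn.Module):
    # Toy multi-input reliability classifier: one branch projects a
    # pre-computed text embedding of the thread, the other encodes the
    # ordered sequence of user-interaction features with a GRU.
    def __init__(self, text_dim=768, inter_dim=16, hidden=64):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.seq_enc = nn.GRU(inter_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)  # reliable vs. unreliable

    def forward(self, text_emb, interactions):
        t = torch.relu(self.text_proj(text_emb))
        _, h = self.seq_enc(interactions)        # final hidden state of the GRU
        return self.head(torch.cat([t, h[-1]], dim=-1))

model = MultiInputClassifier()
text_emb = torch.randn(4, 768)         # e.g. embeddings from a language model
interactions = torch.randn(4, 10, 16)  # 10 interaction steps per cascade
print(model(text_emb, interactions).shape)  # torch.Size([4, 2])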
4.
arxiv; 2023.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2304.02800v1

ABSTRACT

The COVID-19 pandemic has intensified numerous social issues that warrant academic investigation. Although information dissemination has been extensively studied, silenced voices and censored content also merit attention because of their role in mobilizing social movements. In this paper, we provide empirical evidence on the relationships among COVID-19 regulations, censorship, and protest through a series of social incidents that occurred in China during 2022. We analyze the similarities and differences between censored articles and discussions on r/china_irl, the most popular Chinese-speaking subreddit (a topical-overlap comparison of this kind is sketched below), and scrutinize the temporal dynamics of government censorship and its impact on user engagement within the subreddit. Furthermore, we examine users' linguistic patterns under a censorship-driven environment. Our findings reveal patterns of topic recurrence, a complex interplay between censorship activity, user subscriptions, and collective commenting behavior, and potential linguistic adaptation strategies for circumventing censorship. These insights hold significant implications for researchers interested in the survival mechanisms of marginalized groups within censored information ecosystems.


Subject(s)
COVID-19
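
A minimal sketch of one way to compare censored articles with subreddit discussions, using TF-IDF cosine similarity as a rough topical-overlap signal; the placeholder texts and the choice of similarity measure are assumptions, not the paper's method.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder texts; in practice these would be censored articles and
# r/china_irl posts drawn from the same time window.
censored_articles = [
    "lockdown protest in the residential compound",
    "article removed about quarantine conditions",
]
subreddit_posts = [
    "discussion of the compound lockdown protest",
    "unrelated meme thread",
]

vec = TfidfVectorizer()
X = vec.fit_transform(censored_articles + subreddit_posts)
n = len(censored_articles)
# Similarity of each censored article to each subreddit post; high values
# suggest the censored topic resurfaces in the subreddit discussion.
print(cosine_similarity(X[:n], X[n:]))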
5.
arxiv; 2022.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2210.08786v4

ABSTRACT

The detection of state-sponsored trolls acting in information operations is an unsolved and critical challenge for the research community, with repercussions that extend beyond the online realm. In this paper, we propose a novel AI-based solution for detecting state-sponsored troll accounts, which consists of two steps. The first step classifies trajectories of accounts' online activities as belonging to either a state-sponsored troll or an organic user account. In the second step, we exploit the classified trajectories to compute a metric, the "troll score", which quantifies the extent to which an account behaves like a state-sponsored troll (the aggregation is sketched below). As a case study, we consider the troll accounts involved in the Russian interference campaign during the 2016 US Presidential election, identified as Russian trolls by the US Congress. Experimental results show that our approach identifies accounts' trajectories with an AUC close to 99% and, accordingly, classifies Russian trolls and organic users with an AUC of 90%. Finally, we evaluate whether the proposed solution generalizes to different contexts (e.g., discussions about COVID-19) and to generic misbehaving users, with promising results that we will expand on in future work.


Subject(s)
COVID-19
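
A minimal sketch of how an account-level troll score could be aggregated from per-trajectory classifier outputs, assuming the first-step classifier already yields a troll probability per trajectory; the aggregation rule, threshold, and numbers are illustrative, not the paper's exact definition.

def troll_score(trajectory_probs, threshold=0.5):
    # Share of an account's activity trajectories that the classifier
    # flags as troll-like; higher values mean more troll-like behavior.
    flagged = [p >= threshold for p in trajectory_probs]
    return sum(flagged) / len(flagged)

# Probabilities from a (hypothetical) trajectory classifier for one account.
print(troll_score([0.92, 0.81, 0.30, 0.77]))  # 0.75 -> mostly troll-like trajectories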